
Conversation

saksham36
Contributor

No description provided.

SBrandeis (Contributor) commented on Feb 10, 2025

Let's add some tests!

You can find examples here:

describe.concurrent(
	"Fal AI",
	() => {
		const client = new HfInference(env.HF_FAL_KEY);

		it(`textToImage - black-forest-labs/FLUX.1-schnell`, async () => {
			const res = await client.textToImage({
				model: "black-forest-labs/FLUX.1-schnell",
				provider: "fal-ai",
				inputs:
					"Extreme close-up of a single tiger eye, direct frontal view. Detailed iris and pupil. Sharp focus on eye texture and color. Natural lighting to capture authentic eye shine and depth.",
			});
			expect(res).toBeInstanceOf(Blob);
		});

		it(`automaticSpeechRecognition - openai/whisper-large-v3`, async () => {
			const res = await client.automaticSpeechRecognition({
				model: "openai/whisper-large-v3",
				provider: "fal-ai",
				data: new Blob([readTestFile("sample2.wav")], { type: "audio/x-wav" }),
			});
			expect(res).toMatchObject({
				text: " he has grave doubts whether sir frederick leighton's work is really greek after all and can discover in it but little of rocky ithaca",
			});
		});

		it("textToVideo - genmo/mochi-1-preview", async () => {
			const res = await textToVideo({
				model: "genmo/mochi-1-preview",
				inputs: "A running dog",
				parameters: {
					seed: 176,
				},
				provider: "fal-ai",
				accessToken: env.HF_FAL_KEY,
			});
			expect(res).toBeInstanceOf(Blob);
		});

		it("textToVideo - HunyuanVideo", async () => {
			const res = await textToVideo({
				model: "tencent/HunyuanVideo",
				inputs: "A running dog",
				parameters: {
					seed: 176,
					num_inference_steps: 2,
					num_frames: 85,
					resolution: "480p",
				},
				provider: "fal-ai",
				accessToken: env.HF_FAL_KEY,
			});
			expect(res).toBeInstanceOf(Blob);
		});

		it("textToVideo - CogVideoX-5b", async () => {
			const res = await textToVideo({
				model: "THUDM/CogVideoX-5b",
				inputs: "A running dog",
				parameters: {
					seed: 176,
					num_frames: 2,
				},
				provider: "fal-ai",
				accessToken: env.HF_FAL_KEY,
			});
			expect(res).toBeInstanceOf(Blob);
		});

		it("textToVideo - LTX-Video", async () => {
			const res = await textToVideo({
				model: "Lightricks/LTX-Video",
				inputs: "A running dog",
				parameters: {
					seed: 176,
					num_inference_steps: 2,
				},
				provider: "fal-ai",
				accessToken: env.HF_FAL_KEY,
			});
			expect(res).toBeInstanceOf(Blob);
		});
	},
	TIMEOUT
);

You will need to override the HF model ID -> Black Forest Labs model ID mapping, similarly to what is done here:

HARDCODED_MODEL_ID_MAPPING.nebius = {
	"meta-llama/Llama-3.1-8B-Instruct": "meta-llama/Meta-Llama-3.1-8B-Instruct",
	"meta-llama/Llama-3.1-70B-Instruct": "meta-llama/Meta-Llama-3.1-70B-Instruct",
	"black-forest-labs/FLUX.1-schnell": "black-forest-labs/flux-schnell",
};

Note that the keys of this mapping must be the names of models on the HF Hub.
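The Black Forest Labs override would follow the same pattern. Here is a minimal, self-contained sketch; the provider-side model ID ("flux-dev") and the standalone mapping object are assumptions for illustration, not taken from the actual codebase:

```typescript
// Hypothetical sketch of an analogous override for Black Forest Labs.
// Stand-in type and object for the real HARDCODED_MODEL_ID_MAPPING in the repo.
type ModelIdMapping = Record<string, Record<string, string>>;

const HARDCODED_MODEL_ID_MAPPING: ModelIdMapping = {};

HARDCODED_MODEL_ID_MAPPING["black-forest-labs"] = {
	// key: model name on the HF Hub -> value: model ID on the provider's side.
	// "flux-dev" is an assumed provider-side ID, not a confirmed mapping.
	"black-forest-labs/FLUX.1-dev": "flux-dev",
};

console.log(HARDCODED_MODEL_ID_MAPPING["black-forest-labs"]["black-forest-labs/FLUX.1-dev"]);
```

As with the nebius entry above, the HF Hub model name is the lookup key and the provider's own ID is the value.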


Once you've added tests, you can update the VCR tapes (= cached API responses for offline testing) by running this command:

VCR_MODE=cache pnpm run test

SBrandeis self-assigned this on Feb 10, 2025
julien-c (Member)

@saksham36 let us know if any help is needed!

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

SBrandeis requested a review from julien-c on February 12, 2025 at 15:24
Comment on lines +89 to +95
if (provider === "fal-ai" && authMethod === "provider-key") {
	headers["Authorization"] = `Key ${accessToken}`;
} else if (provider === "black-forest-labs" && authMethod === "provider-key") {
	headers["X-Key"] = accessToken;
} else {
	headers["Authorization"] = `Bearer ${accessToken}`;
}
Member


👍

julien-c (Member) left a comment


Thanks @saksham36 for starting the PR and thanks @SBrandeis for pushing it over the finish line! Let's go!!

SBrandeis merged commit 62e314a into huggingface:main on Feb 13, 2025
5 checks passed


4 participants